This work studies the task of automatic emotion detection in music. A piece of music may evoke more than one emotion at the same time, a multiplicity that single-label classification and regression cannot model. This work therefore focuses on multi-label classification approaches, in which a piece of music may simultaneously belong to more than one class. Seven algorithms are experimentally compared for this task. Furthermore, the predictive power of several audio features is evaluated using a new multi-label feature selection method. Experiments are conducted on a set of 593 songs annotated with six clusters of emotions based on the Tellegen-Watson-Clark model of affect. The results show that multi-label modeling is successful and provide interesting insights into the predictive quality of the algorithms and features.
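To make the multi-label setting concrete, the following sketch trains one binary classifier per emotion cluster (the binary-relevance transformation, one standard multi-label approach, though not necessarily among the seven algorithms compared here). It assumes Python with scikit-learn; the feature dimension and all data values are synthetic placeholders, not the audio features used in this work.

```python
# Minimal binary-relevance sketch: one independent binary classifier
# per emotion label, so a song can receive several labels at once.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import hamming_loss
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)

# Placeholder data: 593 songs and 6 emotion clusters match the setup
# described above; the 72-dimensional feature vectors are synthetic
# stand-ins for real extracted audio features.
X = rng.normal(size=(593, 72))
Y = rng.integers(0, 2, size=(593, 6))  # each song may carry multiple labels

X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.3, random_state=0
)

# Binary relevance: MultiOutputClassifier fits one classifier per column of Y.
clf = MultiOutputClassifier(RandomForestClassifier(random_state=0))
clf.fit(X_train, Y_train)

Y_pred = clf.predict(X_test)
print("Hamming loss:", hamming_loss(Y_test, Y_pred))
```

Hamming loss, the fraction of label predictions that disagree with the ground truth, is a common evaluation measure in this setting because it scores each of the six labels independently.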